
    Active Structure from Motion for Spherical and Cylindrical Targets

    Structure from Motion (SfM) is a classical and well-studied problem in computer and robot vision, and many solutions have been proposed to treat it as a recursive filtering/estimation task. However, the issue of actively optimizing the transient response of the SfM estimation error has not received comparable attention. In this paper, we provide an experimental validation of a recently proposed nonlinear active SfM strategy via two concrete applications: 3D structure estimation for a spherical and a cylindrical target. The experimental results fully support the theoretical analysis and clearly show the benefits of the proposed active strategy. Indeed, by suitably acting on the camera motion and estimation gains, it is possible to assign the error transient response and make it equivalent to that of a reference linear second-order system with desired poles.
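    A minimal sketch of the idea behind this pole assignment (the notation below is assumed for illustration, not taken verbatim from the paper): if z̃ denotes the error on the unmeasured structure parameters, the estimation gains and the camera velocity jointly determine the coefficients of an (approximately) linear second-order error model, so the transient can be matched to a desired reference system.

```latex
% Hedged sketch (notation assumed): approximate second-order dynamics of the structure
% estimation error \tilde{z} in an active SfM scheme.
\ddot{\tilde{z}} + D\,\dot{\tilde{z}} + \alpha\,\Omega(v)\,\Omega^{\top}(v)\,\tilde{z} \;\approx\; 0
% D is set by the estimation gains and \Omega(v) depends on the camera linear velocity v.
% Matching a desired model \ddot{e} + 2\zeta\omega_n \dot{e} + \omega_n^2 e = 0 then amounts to
% choosing D = 2\zeta\omega_n I and regulating \alpha\,\Omega\Omega^{\top} = \omega_n^2 I
% through the camera motion.
```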

    A Real-Time Game Theoretic Planner for Autonomous Two-Player Drone Racing

    To be successful in multi-player drone racing, a player must not only follow the race track in an optimal way, but also compete with other drones through strategic blocking, faking, and opportunistic passing while avoiding collisions. Since unveiling one's own strategy to the adversaries is not desirable, this requires each player to independently predict the other players' future actions. Nash equilibria are a powerful tool to model this and similar multi-agent coordination problems in which the absence of communication impedes full coordination between the agents. In this paper, we propose a novel receding horizon planning algorithm that, exploiting sensitivity analysis within an iterated best response computational scheme, can approximate Nash equilibria in real time. We also describe a vision-based pipeline that allows each player to estimate its opponent's relative position. We demonstrate that our solution effectively competes against alternative strategies in a large number of drone racing simulations. Hardware experiments with onboard vision sensing prove the practicality of our strategy.
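    The planner itself is not reproduced here; the following is a minimal, hypothetical sketch of an iterated best response loop, where `best_response` and the convergence test are illustrative placeholders, meant only to show how approximate Nash equilibria can be computed by alternately optimizing each player's trajectory against the other's latest plan.

```python
# Hypothetical sketch of an iterated best response (IBR) scheme for a two-player race.
# The cost model and the best_response solver are illustrative stubs, not the paper's algorithm.
import numpy as np

def best_response(own_plan, opponent_plan, horizon):
    """Placeholder: return the trajectory that minimizes our racing cost while avoiding
    the opponent's predicted trajectory (e.g., via a local trajectory optimizer)."""
    # A real planner would solve a constrained optimal control problem here.
    return own_plan

def iterated_best_response(plan_a, plan_b, horizon=20, iters=5, tol=1e-3):
    """Alternate best responses until both plans stop changing (approximate Nash equilibrium)."""
    for _ in range(iters):
        new_a = best_response(plan_a, plan_b, horizon)
        new_b = best_response(plan_b, new_a, horizon)
        if np.linalg.norm(new_a - plan_a) < tol and np.linalg.norm(new_b - plan_b) < tol:
            return new_a, new_b
        plan_a, plan_b = new_a, new_b
    return plan_a, plan_b
```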

    An Active Strategy for Plane Detection and Estimation with a Monocular Camera

    Plane detection and estimation from visual data is a classical problem in robotic vision. In this work we propose a novel active strategy in which a monocular camera tries to determine whether a set of observed point features belongs to a common plane and, if so, what the associated plane parameters are. The active component of the strategy imposes an optimized camera motion (as a function of the observed scene) able to maximize the convergence in estimating the scene structure. Based on this strategy, two methods are then proposed to solve the plane estimation task: a classical solution exploiting the homography constraint (and, thus, almost completely based on image correspondences across distant frames), and an alternative method fully taking advantage of the scene structure estimated incrementally during the camera motion. The two methods are extensively compared in several case studies by discussing their various pros and cons.
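    The homography-based variant can be illustrated with standard tools. The sketch below is not the paper's implementation; the thresholds are assumptions, and it simply fits a homography between two distant frames and uses the RANSAC inlier ratio to decide whether the tracked points are consistent with a common plane.

```python
# Hedged sketch: homography-based coplanarity test between two distant frames (OpenCV).
# Threshold values are illustrative assumptions.
import cv2
import numpy as np

def points_on_common_plane(pts1, pts2, reproj_thresh=2.0, inlier_ratio_thresh=0.9):
    """pts1, pts2: Nx2 arrays of corresponding image coordinates in two frames (N >= 4)."""
    pts1 = np.asarray(pts1, dtype=np.float32).reshape(-1, 1, 2)
    pts2 = np.asarray(pts2, dtype=np.float32).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
    if H is None:
        return False, None
    inlier_ratio = float(mask.mean())
    # If (almost) all correspondences satisfy a single homography, the points are consistent
    # with a common plane; H then encodes the plane parameters up to scale.
    return inlier_ratio >= inlier_ratio_thresh, H
```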

    Learning the Shape of Image Moments for Optimal 3D Structure Estimation

    The selection of a suitable set of visual features for an optimal performance of closed-loop visual control or Structure from Motion (SfM) schemes is still an open problem in the visual servoing community. For instance, when considering integral region-based features such as image moments, only heuristic, partial, or local results are currently available for guiding the selection of an appropriate moment set. The goal of this paper is to propose a novel learning strategy able to automatically optimize online the shape of a given class of image moments as a function of the observed scene, so as to improve the SfM performance in estimating the scene structure. As a case study, we consider the problem of recovering the (unknown) 3D parameters of a planar scene from measured moments and known camera motion. The reported simulation results fully confirm the soundness of the approach and its superior performance, compared with more consolidated solutions, in increasing the information gain during the estimation task.
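    To fix ideas, an image moment of a set of tracked points can be written as a weighted sum over the point coordinates. The sketch below is a simplified, discrete version with an assumed quadratic kernel (not the paper's parametrization); it only shows that a "shaped" moment is a moment whose kernel coefficients can be tuned online.

```python
# Hedged sketch: a "shaped" image moment of a discrete point set, i.e. a moment whose
# polynomial kernel g(x, y) has coefficients that could be adapted online.
# The quadratic kernel below is an illustrative assumption.
import numpy as np

def shaped_moment(points, weights):
    """points: Nx2 array of image coordinates (x, y).
    weights: 6 kernel coefficients for g(x, y) = w0 + w1 x + w2 y + w3 xy + w4 x^2 + w5 y^2."""
    x, y = points[:, 0], points[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)  # N x 6
    return float((basis @ np.asarray(weights)).sum())  # m_g = sum_k g(x_k, y_k)
```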

    An Open-Source Hardware/Software Architecture for Quadrotor UAVs

    In this paper, we illustrate an open-source, ready-to-use hardware/software architecture for a quadrotor UAV. The presented platform is cost effective, highly customizable, and easily exploitable by other researchers involved in high-level UAV control tasks, as well as for educational purposes. The use of object-oriented programming and full support of the Robot Operating System (ROS) and Matlab Simulink allow for efficient customization, code reuse, functionality expansion, and rapid prototyping of new algorithms. We provide an extensive illustration of the various UAV components and a thorough description of the main basic algorithms and calibration procedures. Finally, we present some experimental case studies aimed at showing the effectiveness of the proposed architecture.
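    As an illustration of how a high-level controller might plug into a ROS-based quadrotor stack of this kind, the sketch below uses standard rospy calls; the topic names, message types, and toy control law are assumptions, not the platform's actual interface.

```python
#!/usr/bin/env python
# Hedged sketch of a high-level ROS node for a quadrotor stack.
# Topic names, message types, and the toy control law are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped, TwistStamped

class HighLevelController:
    def __init__(self):
        rospy.init_node("high_level_controller")
        self.cmd_pub = rospy.Publisher("/quad/cmd_vel", TwistStamped, queue_size=1)
        rospy.Subscriber("/quad/pose", PoseStamped, self.pose_callback)

    def pose_callback(self, msg):
        cmd = TwistStamped()
        cmd.header.stamp = rospy.Time.now()
        # Toy proportional law driving the quadrotor toward the origin.
        cmd.twist.linear.x = -0.5 * msg.pose.position.x
        cmd.twist.linear.y = -0.5 * msg.pose.position.y
        cmd.twist.linear.z = -0.5 * msg.pose.position.z
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    HighLevelController()
    rospy.spin()
```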

    Contributions à la perception active et à la commande de systèmes robotiques (Contributions to Active Perception and Control of Robotic Systems)

    As every scientist and engineer knows, running an experiment requires a careful and thorough planning phase. The goal of such a phase is to ensure that the experiment will give the scientist as much information as possible about the process that she/he is observing, so as to minimize the experimental effort (in terms of, e.g., number of trials, duration of each experiment, and so on) needed to reach a trustworthy conclusion. Similarly, perception is an active process in which the perceiving agent (be it a human, an animal, or a robot) tries its best to maximize the amount of information acquired about the environment using its limited sensor capabilities and resources. In many sensor-based robot applications, the state of a robot can only be partially retrieved from its on-board sensors. State estimation schemes can be exploited for recovering online the "missing information", which is then fed to any planner/motion controller in place of the actual unmeasurable states. When considering non-trivial cases, however, state estimation must often cope with nonlinear sensor mappings from the observed environment to the sensor space, which make the estimation convergence and accuracy strongly affected by the particular trajectory followed by the robot/sensor. For instance, when relying on vision-based control techniques, such as Image-Based Visual Servoing (IBVS), some knowledge about the 3-D structure of the scene is needed for a correct execution of the task. However, this 3-D information cannot, in general, be extracted from a single camera image without additional assumptions on the scene. One can exploit a Structure from Motion (SfM) estimation process for reconstructing this missing 3-D information. However, the performance of any SfM estimator is known to be highly affected by the trajectory followed by the camera during the estimation process, thus creating a tight coupling between the camera motion (needed, e.g., to realize a visual task) and the performance/accuracy of the estimated 3-D structure. In this context, a main contribution of this thesis is the development of an online trajectory optimization strategy that allows maximization of the convergence rate of an SfM estimator by (actively) affecting the camera motion. The optimization is based on the classical persistence of excitation condition used in the adaptive control literature to characterize the well-posedness of an estimation problem. This metric is also strongly related to the Fisher information matrix employed in probabilistic estimation frameworks for similar purposes. We also show how this technique can be coupled with the concurrent execution of an IBVS task using appropriate redundancy resolution and maximization techniques. All of the theoretical results presented in this thesis are validated by an extensive experimental campaign run on a real robotic manipulator equipped with an in-hand camera.
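    In symbols (the notation below is assumed for illustration), the well-posedness metric referred to above is the classical persistence of excitation condition, which plays the same role as an observability-Gramian or Fisher-information-type quantity in probabilistic formulations.

```latex
% Hedged sketch of the persistence of excitation (PE) condition used as optimization metric.
% \Omega(t) is the motion-dependent regressor linking the unmeasured structure to the measurements.
\exists\, T > 0,\ \gamma > 0 \ :\quad
\int_{t}^{t+T} \Omega(\tau)\,\Omega^{\top}(\tau)\, d\tau \;\succeq\; \gamma I \qquad \forall\, t \ge 0
% The active strategy shapes the camera trajectory so as to increase the smallest eigenvalue of
% this integral, i.e. the least excited direction of the estimation problem.
```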

    Active Decentralized Scale Estimation for Bearing-Based Localization

    In this paper, we propose a novel decentralized active perception strategy that maximizes the convergence rate in estimating the (unmeasurable) formation scale in the context of bearing-based formation localization for robots evolving in R^3 × S^1. The proposed algorithm does not assume the presence of a global reference frame and only requires bearing rigidity of the formation (for the localization problem to admit a unique solution) and the presence of (at least) one pair of robots in mutual visibility. Two different scenarios are considered, in which the active scale estimation problem is treated either as a primary task or as a secondary objective with respect to the constraint of attaining a desired bearing formation. The theoretical results are validated by realistic simulations.
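    A short worked note on why the scale is unmeasurable from bearings alone (notation assumed, not taken from the paper):

```latex
% Hedged sketch of the bearing measurement model.
% p_i, p_j : robot positions (expressed in local frames in the actual decentralized setting).
\beta_{ij} \;=\; \frac{p_j - p_i}{\lVert p_j - p_i \rVert}
% Scaling every position by any s > 0 leaves all bearings unchanged,
% \beta_{ij}(s\,p_1, \dots, s\,p_n) = \beta_{ij}(p_1, \dots, p_n),
% so bearings alone cannot fix the formation scale; an additional metric cue is needed,
% and the active strategy excites the agents' motion so as to speed up its estimation.
```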

    A Framework for Active Estimation: Application to Structure from Motion

    State estimation is a fundamental and challenging problem in many applications involving planning and control, in particular when dealing with systems exhibiting nonlinear dynamics. While the design of nonlinear observers is an active research field, the issue of optimizing over time the transient response of the estimation error has not received, to the best of our knowledge, comparable attention. In this paper, an active strategy for tuning the transient response of a particular class of nonlinear observers is discussed. This is achieved by suitably acting on the estimation gains and on the inputs applied to the system under observation. The theory is validated by simulation results applied to two visual estimation tasks (Structure from Motion, SfM).
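    As a concrete, simplified instance of this class of observers, the sketch below simulates depth estimation for a single point feature under a known camera velocity; the observer structure is the standard point-depth formulation, while the gains, velocity profile, and initial conditions are illustrative assumptions.

```python
# Hedged sketch: reduced-order observer estimating the inverse depth chi = 1/Z of a point
# feature from its normalized image coordinates s = (x, y) and a known camera linear velocity.
# Gains, velocity profile, and initial conditions are illustrative assumptions.
import numpy as np

def simulate(alpha=4.0, h=4.0, dt=1e-3, T=5.0):
    s = np.array([0.1, -0.2]); chi = 1.0 / 2.0      # true feature: image coords and inverse depth
    s_hat = s.copy(); chi_hat = 1.0 / 5.0           # observer state, wrong initial depth guess
    for k in range(int(T / dt)):
        t = k * dt
        v = np.array([np.cos(2.0 * t), np.sin(2.0 * t), 0.0])      # exciting linear velocity (m/s)
        Omega = np.array([s[0] * v[2] - v[0], s[1] * v[2] - v[1]])  # regressor multiplying chi
        e = s - s_hat                                # measurable innovation
        # Observer: gain h on the measured part, gain alpha on the unmeasured part.
        s_hat = s_hat + dt * (Omega * chi_hat + h * e)
        chi_hat = chi_hat + dt * (v[2] * chi_hat ** 2 + alpha * (Omega @ e))
        # "True" system propagation (rotation-free point-depth dynamics).
        s = s + dt * Omega * chi
        chi = chi + dt * v[2] * chi ** 2
    return chi, chi_hat

if __name__ == "__main__":
    chi, chi_hat = simulate()
    print("true inverse depth: %.3f, estimated: %.3f" % (chi, chi_hat))
```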

    Coupling Visual Servoing with Active Structure from Motion

    In this paper we propose a solution for coupling the execution of a visual servoing task with a recently developed active Structure from Motion strategy able to optimize online the convergence rate in estimating the (unknown) 3D structure of the scene. This is achieved by suitably modifying the robot trajectory in the null space of the servoing task so as to render the camera motion 'more informative' w.r.t. the states to be estimated. As a byproduct, the better 3D structure estimate also improves the evaluation of the servoing interaction matrix which, in turn, results in a better closed-loop convergence of the task itself. The reported experimental results support the theoretical analysis and show the benefits of the method.
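    The null-space coupling can be sketched with the standard redundancy-resolution formula (notation assumed for illustration):

```latex
% Hedged sketch of the null-space coupling between the servoing task and the active SfM term.
% J : interaction matrix (task Jacobian), e : visual servoing error, u : camera velocity command,
% u_act : velocity that would maximize the information gathered by the SfM estimator.
u \;=\; -\lambda\, J^{+} e \;+\; \bigl(I - J^{+} J\bigr)\, u_{\mathrm{act}}
% The projector (I - J^+ J) keeps the "more informative" motion u_act from perturbing the primary
% servoing task, while the improved structure estimate refines J online.
```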